
    Dissipative systems: uncontrollability, observability and RLC realizability

    The theory of dissipativity has been developed primarily for controllable systems/behaviors. For various reasons, in the context of uncontrollable systems/behaviors, a more appropriate definition of dissipativity is in terms of the dissipation inequality, namely the existence of a storage function. A storage function is a function such that, along every system trajectory, the rate of increase of the storage function is at most the power supplied. While the power supplied is always expressed in terms of the external variables only, whether or not the storage function should be allowed to depend on unobservable/hidden variables has various consequences for the notion of dissipativity: this paper thoroughly investigates the key aspects of both cases and also proposes another intuitive definition of dissipativity. We first assume that the storage function can be expressed in terms of the external variables and their derivatives only, and prove our first main result: assuming the uncontrollable poles are unmixed, i.e. no pair of uncontrollable poles adds to zero, and assuming strictness of dissipativity at the infinite frequency, the dissipativity of a system and that of its controllable part are equivalent. We also show that the storage function in this case is a static state function. We then investigate the utility of unobservable/hidden variables in the definition of the storage function: we prove that lossless autonomous behaviors require the storage function to be unobservable from the external variables. We next propose another intuitive definition: a behavior is called dissipative if it can be embedded in a controllable dissipative super-behavior. We show that this definition imposes a constraint on the number of inputs and thus explains unintuitive examples from the literature in the context of lossless/orthogonal behaviors. Comment: 26 pages, one figure. Partial results appeared in an IFAC conference (World Congress, Milan, Italy, 2011).
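    For orientation, the dissipation inequality referred to above can be stated in its standard state-space form (a sketch of the usual definition, with state x, input u, output y and supply rate s(u, y); the paper itself works in the behavioral setting): a storage function S satisfies

    \[
      S\big(x(t_1)\big) - S\big(x(t_0)\big) \;\le\; \int_{t_0}^{t_1} s\big(u(t), y(t)\big)\, dt
      \qquad \text{for all trajectories and all } t_0 \le t_1,
    \]

    or, in differential form, \( \tfrac{d}{dt}\, S\big(x(t)\big) \le s\big(u(t), y(t)\big) \): the storage can grow at most as fast as the power supplied.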

    Fundamental Aspects of the ISM Fractality

    The ubiquitous clumpy state of the ISM raises a fundamental and open problem of physics, namely the correct statistical treatment of systems dominated by long-range interactions. A simple solvable hierarchical model is presented which explains why systems dominated by gravity prefer to adopt a fractal dimension around 2 or less, like the cold ISM and large-scale structures. This is directly related to the general transparency, or blackness, of the Universe. Comment: 6 pages, LaTeX2e, crckapb macro, no figure, uuencoded compressed tar file. To be published in the proceedings of the "Dust-Morphology" conference, Johannesburg, 22-26 January, 1996, D. Block (ed.), (Kluwer, Dordrecht).
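    For context, a fractal dimension D in this setting enters through the standard mass-size scaling (a generic relation, not a result specific to this paper):

    \[
      M(R) \propto R^{D}, \qquad \Sigma(R) \equiv \frac{M(R)}{\pi R^{2}} \propto R^{D-2},
    \]

    so D \approx 2 is the value at which the projected surface density of the clumpy medium becomes roughly scale-independent, which is one way to see how the fractal geometry connects to the transparency or blackness along a line of sight.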

    On the role of different Skyrme forces and surface corrections in exotic cluster-decay

    We present cluster-decay studies of ^{56}Ni^* formed in heavy-ion collisions using different Skyrme forces. Our study reveals that different Skyrme forces do not alter the transfer structure of the fractional yields significantly. The cluster-decay half-lives of the different clusters lie within \pm 10% for the PCM and \pm 15% for the UFM. Comment: 13 pages, 6 figures and 1 table; in press, Pramana Journal of Physics (2010).
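    For context, in preformed-cluster-type models the half-life quoted above is usually obtained from a decay constant built from a cluster preformation probability P_0, a barrier penetrability P and an assault frequency \nu_0 (a standard decomposition, not necessarily the exact notation of this paper):

    \[
      \lambda = \nu_0\, P_0\, P, \qquad T_{1/2} = \frac{\ln 2}{\lambda}.
    \]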

    Decision Making for Inconsistent Expert Judgments Using Negative Probabilities

    In this paper we provide a simple random-variable example of inconsistent information and analyze it using three different approaches: Bayesian, quantum-like, and negative probabilities. We then show that, at least for this particular example, both the Bayesian and the quantum-like approaches have less normative power than the negative-probabilities one. Comment: 14 pages, revised version to appear in the Proceedings of the QI2013 (Quantum Interactions) conference.
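    A minimal sketch of the kind of construction the negative-probabilities approach relies on (an illustrative toy case with three pairwise perfectly anti-correlated \pm 1 variables, not necessarily the paper's own example): since (XY)(YZ)(XZ) = (XYZ)^2 = 1 for every outcome, no proper joint distribution can give E[XY] = E[YZ] = E[XZ] = -1, but a signed quasi-distribution can.

    from itertools import product

    # Signed joint "probabilities" for three +/-1 variables (X, Y, Z):
    # the two all-equal outcomes carry negative weight, the six mixed
    # outcomes share the rest, and the weights still sum to one.
    quasi = {
        (x, y, z): (-0.25 if x == y == z else 0.25)
        for x, y, z in product((+1, -1), repeat=3)
    }

    def expect(f):
        """Expectation of f(x, y, z) under the signed measure."""
        return sum(p * f(*outcome) for outcome, p in quasi.items())

    assert abs(expect(lambda x, y, z: 1) - 1.0) < 1e-12   # normalization
    for pair in (lambda x, y, z: x * y,
                 lambda x, y, z: y * z,
                 lambda x, y, z: x * z):
        assert abs(expect(pair) + 1.0) < 1e-12            # E[..] = -1 for each pair

    print("signed measure reproduces E[XY] = E[YZ] = E[XZ] = -1")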

    Epistemic and Ontic Quantum Realities

    Quantum theory has provoked intense discussions about its interpretation since its pioneering days. One of the few scientists who have been continuously engaged in this development from both physical and philosophical perspectives is Carl Friedrich von Weizsaecker. The questions he posed were and are inspiring for many, including the authors of this contribution. Weizsaecker developed Bohr's view of quantum theory as a theory of knowledge. We show that such an epistemic perspective can be consistently complemented by Einstein's ontically oriented position.

    Classical kinetic energy, quantum fluctuation terms and kinetic-energy functionals

    We employ a recently formulated dequantization procedure to obtain an exact expression for the kinetic energy which is applicable to all kinetic-energy functionals. We express the kinetic energy of an N-electron system as the sum of an N-electron classical kinetic energy and an N-electron purely quantum kinetic energy arising from the quantum fluctuations that turn the classical momentum into the quantum momentum. This leads to an interesting analogy with Nelson's stochastic approach to quantum mechanics, which we use to conceptually clarify the physical nature of part of the kinetic-energy functional in terms of statistical fluctuations, in direct correspondence with Fisher information theory. We show that the N-electron purely quantum kinetic energy can be written as the sum of the (one-electron) Weizsacker term and an (N-1)-electron kinetic correlation term. We further show that the Weizsacker term results from local fluctuations while the kinetic correlation term results from nonlocal fluctuations. For one-electron orbitals (where kinetic correlation is neglected) we obtain an exact (albeit impractical) expression for the noninteracting kinetic energy as the sum of the classical kinetic energy and the Weizsacker term. The classical kinetic energy is seen to depend explicitly on the electron phase, and this has implications for the development of accurate orbital-free kinetic-energy functionals. Also, there is a direct connection between the classical kinetic energy and the angular momentum and, across a row of the periodic table, the classical kinetic energy component of the noninteracting kinetic energy generally increases as Z increases. Comment: 10 pages, 1 figure. To appear in Theor Chem Acc.
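    For reference, the (one-electron) Weizsacker term mentioned above has the standard form (in Hartree atomic units):

    \[
      T_{W}[\rho] \;=\; \frac{1}{8} \int \frac{\left| \nabla \rho(\mathbf{r}) \right|^{2}}{\rho(\mathbf{r})}\, d\mathbf{r},
    \]

    which is exact for a single orbital and, in the decomposition described above, plays the role of the local-fluctuation part of the purely quantum kinetic energy.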

    Massive stars as thermonuclear reactors and their explosions following core collapse

    Nuclear reactions transform atomic nuclei inside stars. This is the process of stellar nucleosynthesis. The basic concepts of determining nuclear reaction rates inside stars are reviewed. How stars manage to burn their fuel so slowly most of the time is also considered. Stellar thermonuclear reactions involving protons in hydrostatic burning are discussed first. Then I discuss the triple-alpha reactions of the helium-burning stage. Carbon and oxygen survive in red giant stars because of the nuclear structure of oxygen and neon. Further nuclear burning of carbon, neon, oxygen and silicon in quiescent conditions is discussed next. In the subsequent core-collapse phase, neutronization due to electron capture from the top of the Fermi sea in a degenerate core takes place. The expected signal of neutrinos from a nearby supernova is calculated. The supernova often explodes inside a dense circumstellar medium, which is established because the progenitor star loses its outermost envelope in a stellar wind or through mass transfer in a binary system. The nature of the circumstellar medium and of the supernova ejecta, and their dynamics, are revealed by observations in the optical, IR, radio, and X-ray bands, and I discuss some of these observations and their interpretations. Comment: To be published in "Principles and Perspectives in Cosmochemistry", Lecture Notes of the Kodai School on Synthesis of Elements in Stars; ed. by Aruna Goswami & Eswar Reddy, Springer Verlag, 2009. Contains 21 figures.
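    For orientation, the net reactions of the two hydrostatic burning stages mentioned above are (standard values quoted for context):

    \[
      4\,^{1}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e} \quad (Q \simeq 26.7\ \mathrm{MeV}),
      \qquad
      3\,^{4}\mathrm{He} \;\rightarrow\; {}^{12}\mathrm{C} + \gamma \quad (Q \simeq 7.27\ \mathrm{MeV}),
    \]

    the first proceeding through the pp chains (or the CNO cycle) in hydrogen burning and the second through the triple-alpha process in helium burning.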

    Random-phase approximation and its applications in computational chemistry and materials science

    The random-phase approximation (RPA) as an approach for computing the electronic correlation energy is reviewed. After a brief account of its basic concept and historical development, the paper is devoted to the theoretical formulations of RPA and its applications to realistic systems. With several illustrative applications, we discuss the implications of RPA for computational chemistry and materials science. The computational cost of RPA, which is critical for its widespread use in future applications, is also addressed. In addition, current correction schemes going beyond RPA and directions for further development are discussed. Comment: 25 pages, 11 figures, published online in J. Mater. Sci. (2012).
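    For reference, the RPA correlation energy discussed in the review is commonly written in its adiabatic-connection fluctuation-dissipation form (a standard expression, reproduced here for orientation), with \chi_0 the independent-particle response function and v the Coulomb interaction:

    \[
      E_{c}^{\mathrm{RPA}} \;=\; \frac{1}{2\pi} \int_{0}^{\infty} \! d\omega \;
      \mathrm{Tr}\!\left[ \ln\!\big(1 - \chi_{0}(i\omega)\, v\big) + \chi_{0}(i\omega)\, v \right].
    \]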